---
title: September 2023
description: Read about DataRobot's new public preview and generally available features released in September 2023.

---

# September 2023 {: #september-2023 }

_September 27, 2023_

With the latest deployment, DataRobot's AI Platform delivered the new GA and Public Preview features listed below. From the release center you can also access:

* [Monthly deployment announcement history](cloud-history/index)
* [Public preview features](public-preview/index)
* [Self-Managed AI Platform release notes](archive-release-notes/index)

### In the spotlight {: #in-the-spotlight }

#### Compare models across experiments from a single view  {: #compare-models-across-experiments-from-a-single-view }

Solving a business problem with machine learning is an iterative process that involves running many experiments to test ideas and confirm assumptions. To simplify iteration, Workbench introduces Model Comparison&mdash;a tool that lets you compare up to three models, side by side, from any number of experiments within a single Use Case. Instead of opening each experiment individually and recording metrics for later comparison, you can now compare models across experiments in a single view.

The comparison Leaderboard is accessible from any project in Workbench. From it, you can filter models to more easily locate and select them, compare models across different insights, and view and compare metadata for the selected models. The Comparison tab is a public preview feature, on by default.

The video below provides a quick overview of the comparison functionality.

**Feature flag ON by default:** Enable Use Case Leaderboard Compare

Public preview [documentation](wb-model-compare).

<video width="100%" height="auto" poster="../../images/sept-modelcompare.png" controls>
  <source src="../../images/sept-modelcompare.mp4" type="video/mp4">
</video>

#### New Google BigQuery connector added {: #new-google-bigquery-connector-added }

The new BigQuery connector is now generally available in DataRobot. In addition to performance enhancements and [Service Account authentication](dc-bigquery), this connector also enables support for BigQuery in Workbench, allowing you to:

- [Create and configure](wb-connect) data connections.
- Add BigQuery datasets to a Use Case.
- [Wrangle BigQuery datasets](wb-wrangle-data/index), and then publish recipes to BigQuery to materialize the output in the Data Registry.

<video width="100%" height="auto" poster="../../images/sept-bigquery.png" controls>
  <source src="../../images/sept-bigquery.mp4" type="video/mp4">
</video>

### September release {: #september-release }

The following table lists each new feature:

??? abstract "Features grouped by capability"

    Name       |  GA | Public Preview
    ---------- | ---- | ---
    **Data** |  :~~:  | :~~:
    [New Google BigQuery connector added](#new-google-bigquery-connector-added) | ✔ |  
    [Materialize Workbench datasets in Google BigQuery](#materialize-workbench-datasets-in-google-bigquery)|  | ✔
    [Azure Databricks support added to DataRobot](#azure-databricks-support-added-to-datarobot) |  | ✔
    **Modeling** |  :~~:  | :~~:
    [Compare models across experiments from a single view](#compare-models-across-experiments-from-a-single-view) |  | ✔
    [Workbench time-aware capabilities expanded to include time series modeling](#workbench-time-aware-capabilities-expanded-to-include-time-series-modeling) |  | ✔
    [Period Accuracy now available in Workbench and DataRobot Classic](#period-accuracy-now-available-in-Workbench-and-datarobot-classic) | ✔ |  
    [Leaderboard Data and Feature List tabs added to Workbench](#leaderboard-data-and-feature-list-tabs-added-to-workbench) |  | ✔
    **Applications** |  :~~:  | :~~:
    [What-if and Optimizer available in the new app experience](#what-if-and-optimizer-available-in-the-new-app-experience) |  | ✔
    **Predictions and MLOps** |  :~~:  | :~~:
    [Expanded prediction and monitoring job definition access](#expanded-prediction-and-monitoring-job-definition-access) | ✔ |  
    [Real-time notifications for deployments](#real-time-notifications-for-deployments) |  | ✔
    [Custom jobs in the Model Registry](#custom-jobs-in-the-model-registry) |  | ✔
    [Hosted custom metrics](#hosted-custom-metrics) |  | ✔


### GA {: #ga }

#### Period Accuracy now available in Workbench and DataRobot Classic {: #period-accuracy-now-available-in-Workbench-and-datarobot-classic }

Period Accuracy is an insight that lets you define periods within your dataset and then compare their metric scores against the metric score of the model as a whole. It is now generally available for all time series projects. In DataRobot [Classic](ts-period-accuracy), the feature is found on the **Evaluate > Period Accuracy** tab. In [Workbench](ml-experiment-evaluate#period-accuracy), find the insight under **Experiment information**. The insight is also available for [time-aware](wb-ts-experiment/index) experiments.

![](images/period-accuracy-wb1.png)

#### Expanded prediction and monitoring job definition access {: #expanded-prediction-and-monitoring-job-definition-access }

This release expands role-based access controls (RBAC) for [prediction](manage-pred-job-def) and [monitoring](manage-monitoring-job-def) jobs to align with deployment permissions. Previously, when deployments were shared between users, job definitions and batch jobs weren’t shared alongside the deployment. With this update, the _User_ role gains read access to prediction and monitoring job definitions associated with any deployments shared with them. The _Owner_ role gains read and write access to prediction and monitoring job definitions associated with any deployments shared with them. For more information on the capabilities of deployment _Users_ and _Owners_, review the [Roles and permissions documentation](roles-permissions#shared-deployment-job-roles). Shared job definitions appear alongside your own; however, if you don't have access to the credentials associated with a prediction Source or Destination in the AI Catalog, the connection details are **[redacted]**:

![](images/batch-pred-job-def-rbac.png)

For more information, see the documentation for [Shared prediction job definitions](manage-pred-job-def#shared-job-definitions) and [Shared monitoring job definitions](manage-monitoring-job-def#shared-job-definitions).

### Public Preview {: #public-preview }

#### Materialize Workbench datasets in Google BigQuery {: #materialize-workbench-datasets-in-google-bigquery }

Now available for public preview, you can materialize wrangled datasets in Google BigQuery as well as in the Data Registry. To enable this option, wrangle a BigQuery dataset in Workbench, click **Publish**, and select **Publish to BigQuery** in the **Publishing Settings** modal.

Note that you must establish a new connection to BigQuery to use this feature.

Public preview [documentation](wb-pub-recipe#publish-to-your-data-source).

**Feature flag(s) ON by default:**

- Enable BigQuery In-Source Materialization in Workbench
- Enable Dynamic Datasets in Workbench

#### Azure Databricks support added to DataRobot {: #azure-databricks-support-added-to-datarobot }

Support for Azure Databricks has been added to both DataRobot Classic and Workbench, allowing you to:

- Create and configure data connections.
- Add Azure Databricks datasets.

Public preview [documentation](wb-databricks).

**Feature flag OFF by default:** Enable Databricks Driver

#### Workbench time-aware capabilities expanded to include time series modeling {: #workbench-time-aware-capabilities-expanded-to-include-time-series-modeling }

With this deployment, DataRobot users can now use date/time partitioning to build time series experiments. Support for time series setup, modeling, and insights extends date/time partitioning, bringing forecasting capabilities to Workbench. With a significantly more streamlined workflow, including a simple window settings modal with a graphic visualization, Workbench users can easily set up time series experiments.

![](images/rn-wb-ts-exp-18.png)

After modeling, all time series insights are available, as well as experiment summary data that provides a [backtest summary and partitioning log](ts-experiment-evaluate#partitioning-details). Additionally:

* With feature lists and dataset views, you can see the results of feature extraction and reduction.

* Because Quick mode trains only the most crucial blueprints, you can manually build more niche or long-running time series models from the blueprint repository.

See the public preview [documentation](wb-ts-experiment/index) to learn how to create, evaluate, and train new models.

**Feature flags ON by default:**

* Enable Date/Time Partitioning (OTV) in Workbench
* Enable Workbench for Time Series Projects

#### Leaderboard Data and Feature List tabs added to Workbench {: #leaderboard-data-and-feature-list-tabs-added-to-workbench }

This deployment adds two new tabs to the experiment info displayed on the Leaderboard:

* The **Data** tab provides summary analytics of the data used in the project.

* The **Feature lists** tab lists feature lists built for the experiment and available for model training.

![](images/wb-exp-eval-32.png)

**Feature flag ON by default:** Enable New No-Code AI Apps Edit Mode

Public preview [documentation](ml-experiment-evaluate#view-experiment-info).

#### What-if and Optimizer available in the new app experience {: #what-if-and-optimizer-available-in-the-new-app-experience }

In the new Workbench app experience, you can now interact with prediction results using the What-if and Optimizer widget, which provides both a scenario comparison tool and an optimizer tool.

Make sure the tool(s) you want to use are enabled in **Edit** mode. Then, click **Present**, select a prediction row in the **All rows** widget, and scroll down to **What-if and Optimizer**. From here, you can create new scenarios and view the optimized outcome.

=== "Chart view"

    ![](images/app-rn-1.png)

=== "Table view"

    ![](images/app-rn-2.png)

Public preview [documentation](wb-app-edit#prediction-details).

**Feature flag ON by default:** Enable New No-Code AI Apps Edit Mode


#### Real-time notifications for deployments {: #real-time-notifications-for-deployments }

DataRobot provides automated monitoring with a [notification system](deploy-notifications), allowing you to configure alerts triggered when service health, data drift status, model accuracy, or fairness values deviate from your organization's accepted values. Now available for public preview, you can enable real-time notifications for these status alerts, allowing your organization to quickly respond to changes in model health without waiting for scheduled health status notifications:

![](images/real-time-notify.png)

Public preview [documentation](real-time-deployment-notifications).

**Feature flag OFF by default**: Enable Real-time Notifications for Deployments

#### Custom jobs in the Model Registry {: #custom-jobs-in-the-model-registry }

Now available as a public preview feature, you can create custom jobs in the Model Registry to implement automation (for example, custom tests) for your models and deployments. Each job serves as an automated workload, and the exit code determines if it passed or failed. You can run the custom jobs you create for one or more models or deployments. The automated workload you define when you assemble a custom job can make prediction requests, fetch inputs, and store outputs using DataRobot's Public API.

![](images/rn-custom-jobs.png)
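
As a minimal sketch of that pattern, a custom job's entry point might look like the following. The check itself and its threshold are hypothetical placeholders; only the exit-code convention comes from the behavior described above, and a real job would use DataRobot's Public API to make prediction requests, fetch inputs, and store outputs.

```python
def run_check() -> bool:
    """Hypothetical check; a real job might call DataRobot's Public API
    to make prediction requests or fetch deployment data."""
    error_rate = 0.02  # placeholder value; compute a real one in practice
    return error_rate < 0.05  # hypothetical acceptance threshold

def main() -> int:
    # DataRobot reads the process exit code: 0 means the job passed,
    # and any non-zero code means it failed.
    return 0 if run_check() else 1
```

A script like this would end with `sys.exit(main())` so the job's pass/fail status is reported through its exit code.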

Public preview [documentation](custom-jobs).

**Feature flag OFF by default**: Enable Custom Jobs

#### Hosted custom metrics {: #hosted-custom-metrics }

Now available as a public preview feature, you can not only add up to five of your organization's custom metrics to a deployment, but also upload and host code using DataRobot Notebooks to easily add those custom metrics to other deployments. After you configure a custom metric, DataRobot loads a notebook that contains the code for the metric. The notebook contains one custom metric cell&mdash;a unique type of notebook cell holding the Python code that defines how the metric is exported and calculated, the code for scoring, and the code to populate the metric.

![](images/hosted-custom-metrics-rn.png)
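
To illustrate the kind of logic such a cell holds, here is a hypothetical metric computation. The function name, inputs, and choice of mean absolute error are illustrative assumptions, not the actual custom metric cell API, which is described in the public preview documentation linked below.

```python
from typing import Sequence

def calculate_metric(actuals: Sequence[float], predictions: Sequence[float]) -> float:
    """Hypothetical custom metric: mean absolute error over a batch of rows."""
    if len(actuals) != len(predictions):
        raise ValueError("actuals and predictions must be the same length")
    # Average the absolute differences between actual and predicted values.
    return sum(abs(a - p) for a, p in zip(actuals, predictions)) / len(actuals)
```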

Public preview [documentation](hosted-custom-metrics).

**Feature flags OFF by default:**

* Enable Hosted Custom Metrics
* Enable Custom Jobs
* Enable Notebooks Custom Environments

_All product and company names are trademarks&trade; or registered&reg; trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them_.
